The VC Dimension for Mixtures of Binary Classifiers
Abstract
The mixtures-of-experts (ME) methodology provides a tool for classification in which experts consisting of logistic regression models or Bernoulli models are mixed according to a set of local weights. We show that the Vapnik-Chervonenkis (VC) dimension of the mixtures-of-experts architecture is bounded below by the number of experts m, and is bounded above by O(m^4 s^2), where s is the dimension of the input. For mixtures of Bernoulli experts with a scalar input, we show that the lower bound m is attained, in which case we obtain the exact result that the VC dimension is equal to the number of experts.
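To make the architecture concrete, the following is a minimal sketch of how such a mixture classifies an input, assuming softmax gating with linear scores and logistic-regression experts (one common instantiation of ME; the names me_predict_proba, gate_w, and expert_w are illustrative, not taken from the paper). Here m is the number of experts and s is the input dimension, as in the bounds above.

```python
import numpy as np

def me_predict_proba(x, gate_w, expert_w):
    """P(y=1|x) for a mixture-of-experts binary classifier (illustrative sketch).

    x        : (s,) input vector
    gate_w   : (m, s) gating parameters, one row per expert
    expert_w : (m, s) logistic-regression parameters, one row per expert
    """
    scores = gate_w @ x                        # local gating scores, shape (m,)
    g = np.exp(scores - scores.max())
    g /= g.sum()                               # softmax mixing weights, sum to 1
    p = 1.0 / (1.0 + np.exp(-(expert_w @ x)))  # each expert's P(y=1|x)
    return g @ p                               # locally weighted mixture

# Classify by thresholding the mixture probability at 1/2.
rng = np.random.default_rng(0)
m, s = 4, 3
x = np.array([0.5, -1.2, 2.0])
label = int(me_predict_proba(x, rng.normal(size=(m, s)), rng.normal(size=(m, s))) > 0.5)
```

The induced family of labelings, as the gating and expert parameters vary, is what the VC bounds above measure: its VC dimension lies between m and O(m^4 s^2).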
Similar papers
Sample Complexity of Classifiers Taking Values in R^Q, Application to Multi-Class SVMs
Bounds on the risk play a crucial role in statistical learning theory. They usually involve, as a capacity measure of the model studied, the VC dimension or one of its extensions. In classification, such VC dimensions exist for models taking values in {0, 1}, [[1, Q]], and R. We introduce the generalizations appropriate for the missing case, the one of models with values in R^Q. This provides us w...
A Training Algorithm for Optimal Margin Classifiers
A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear co...
Metric Entropy and Minimax Risk in Classification
We apply recent results on the minimax risk in density estimation to the related problem of pattern classification. The notion of loss we seek to minimize is an information-theoretic measure of how well we can predict the classification of future examples, given the classification of previously seen examples. We give an asymptotic characterization of the minimax risk in terms of the metric entropy p...
A bound concerning the generalization ability of a certain class of learning algorithms
A classifier is said to have good generalization ability if it performs on test data almost as well as it does on the training data. The main result of this paper provides a sufficient condition for a learning algorithm to have good finite sample generalization ability. This criterion applies in some cases where the set of all possible classifiers has infinite VC dimension. We apply the result to prov...
Bounding the Generalization Error of Neural Networks and Combined Classifiers
Recently, several authors developed a new approach to bounding the generalization error of complex classifiers (of large or even infinite VC dimension) obtained by combining simpler classifiers. The new bounds are in terms of the distributions of the margin of combined classifiers, and they provide some theoretical explanation of the generalization performance of large neural networks. We obtained new probabilistic ...